# Zero-shot Reasoning
- **Gemma 3 27b It Uncensored** · braindao · 57 · 2
  Large Language Model · Transformers
  A model distributed via the transformers library; its specific capabilities and intended uses are not yet documented.
- **Llama 3.1 8b DodoWild V2.01** · Nexesenex · 58 · 2
  Large Language Model · Transformers
  An 8B-parameter language model on the Llama 3.1 architecture, created by merging multiple models with mergekit; capable of text generation.
- **Llama 3.1 8b Medusa V1.01** · Nexesenex · 95 · 3
  Large Language Model · Transformers
  An 8B-parameter language model on the Llama 3.1 architecture, created by merging multiple specialized models; excels at text generation.
- **Li 14b V0.4 Slerp0.1** · wanlige · 70 · 7
  Large Language Model · Transformers
  A 14B-parameter large language model merged with the SLERP method, combining two base models: li-14b-v0.4 and miscii-14b-0218.
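The SLERP merge named above interpolates corresponding weight tensors along the arc between them rather than along a straight line. A minimal numpy sketch of the idea follows; it is not mergekit's actual implementation, which handles per-layer interpolation factors and many edge cases:

```python
import numpy as np

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    a, b = w_a.ravel(), w_b.ravel()
    # Angle between the two flattened weight vectors.
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel vectors: fall back to ordinary linear interpolation.
        merged = (1 - t) * a + t * b
    else:
        merged = (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
    return merged.reshape(w_a.shape)
```

In a real merge this runs tensor-by-tensor over the two checkpoints, with `t` (0.1 in this model's name, presumably) controlling how far the result moves from the first base model toward the second.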
- **Deepseek R1 Distill Phi 3 Mini 4k Lorar8 Alpha16 50000samples** · MIT · GPD1 · 71 · 4
  Large Language Model · English
  A reasoning model distilled from DeepSeek-R1, supporting Chain-of-Thought (CoT) reasoning.
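The `Lorar8 Alpha16` suffix suggests the distillation was trained with LoRA adapters at rank 8 and alpha 16. A toy numpy sketch of how a LoRA update attaches to, and later merges into, a frozen weight matrix; the dimensions and initialization here are illustrative assumptions, not details of this model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16     # rank/alpha as suggested by the model name

W = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(r, d_in))            # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection, zero-init so the delta starts at zero

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Base projection plus the scaled low-rank update (alpha / r scaling).
    return W @ x + (alpha / r) * (B @ (A @ x))

# For inference the adapter can be folded back into the base weight.
W_merged = W + (alpha / r) * (B @ A)
```

Training updates only `A` and `B` (about `r * (d_in + d_out)` parameters per adapted matrix), which is why LoRA fine-tuning fits on modest hardware.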
- **Llava Llama3** · chatpig · 360 · 1
  Image-to-Text
  LLaVA-Llama3 is a multimodal model based on Llama-3, supporting joint processing of images and text.
- **Mt0 Xxl Mt Q4 K M GGUF** · Apache-2.0 · Markobes · 14 · 1
  Large Language Model · Supports Multiple Languages
  A multilingual text-generation model converted from bigscience/mt0-xxl-mt to GGUF format via llama.cpp, supporting a range of language tasks.
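GGUF files produced by llama.cpp begin with a small fixed little-endian header: the magic bytes `GGUF`, a version, a tensor count, and a metadata key-value count. A minimal sketch of a header check, run here against a synthetic byte string rather than a real model file:

```python
import struct

def read_gguf_header(data: bytes) -> tuple[int, int, int]:
    """Parse the fixed-size GGUF header: magic, version, tensor count, KV count."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return version, n_tensors, n_kv

# Synthetic header for illustration: version 3, 2 tensors, 5 metadata keys.
blob = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
```

The metadata key-value section that follows the header is where quantization details such as the `Q4_K_M` scheme in this model's name are recorded.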
- **Llava SpaceSGG** · Apache-2.0 · wumengyangok · 36 · 0
  Visual Question Answering · English
  LLaVA-SpaceSGG is a visual question-answering model based on LLaVA-v1.5-13b, focused on scene graph generation: it understands image content and produces structured scene descriptions.
- **Sd3 Long Captioner V2** · Apache-2.0 · gokaygokay · 135 · 25
  Image-to-Text · Transformers · Supports Multiple Languages
  A fine-tuned image-to-text model based on the PaliGemma 224x224 version, specializing in detailed descriptions of artistic images.
- **Flan T5 Tsa Prompt Xl** · MIT · nicolay-r · 45 · 1
  Text Classification · Transformers · English
  A Flan-T5-xl model fine-tuned for targeted sentiment analysis, judging sentiment polarity (positive/negative/neutral) toward a target in English text.
- **Llava V1.6 Mistral 7b Partial Med** · Apache-2.0 · rbojja · 16 · 1
  Image-to-Text · Transformers
  A medical visual question answering system built on the LLaVA-v1.6-Mistral vision-language model (VLM), able to understand and answer questions about medical images.
- **Rwkv 5 World 7b** · Apache-2.0 · SmerkyG · 19 · 1
  Large Language Model · Transformers
  RWKV-5 Eagle 7B is a 7B-parameter large language model built on the RWKV architecture, trained for multilingual text generation.
- **Med BLIP 2 QLoRA** · NouRed · 16 · 1
  Visual Question Answering
  A BLIP-2 vision-language model (OPT-2.7B backbone) fine-tuned with QLoRA for visual question answering; it understands image content and answers related questions.
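QLoRA keeps the frozen base model's weights in 4-bit quantized form (NF4) and trains only low-rank adapters on top. The sketch below shows plain blockwise absmax quantization as a simplified stand-in; real NF4 uses a non-uniform codebook matched to normally distributed weights, so the constants and rounding here are illustrative only:

```python
import numpy as np

def quantize_blockwise(w: np.ndarray, block: int = 64, levels: int = 16):
    """Blockwise absmax quantization: each block of weights shares one scale."""
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True)   # one absmax per block
    q = np.round(w / scale * (levels // 2 - 1))    # map to small signed integers
    return q.astype(np.int8), scale

def dequantize_blockwise(q: np.ndarray, scale: np.ndarray, levels: int = 16):
    return q.astype(np.float32) / (levels // 2 - 1) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 64)).astype(np.float32)
q, s = quantize_blockwise(w)
w_hat = dequantize_blockwise(q, s).reshape(w.shape)
```

Storing a 4-bit code plus a per-block scale in place of each 16- or 32-bit weight is what lets QLoRA fine-tune models of this size on a single GPU.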
- **Litellama 460M 1T** · MIT · ahxt · 1,225 · 162
  Large Language Model · Transformers · English
  LiteLlama is an open-source, scaled-down version of Meta AI's LLaMa 2, with only 460 million parameters and trained on 1 trillion tokens.
- **Open Llama 3b V2 Instruct** · Apache-2.0 · mediocredev · 243 · 6
  Large Language Model · Transformers
  An instruction-fine-tuned language model based on the LLaMA 3B v2 architecture, suited to text-generation tasks.
- **Ggml Llava V1.5 7b** · Apache-2.0 · y10ab1 · 44 · 2
  Image-to-Text
  LLaVA is a vision-language model capable of understanding images and generating related text.
- **Instructblip Flan T5 Xl 8bit Nf4** · MIT · Mediocreatmybest · 22 · 0
  Image-to-Text · Transformers · English
  InstructBLIP is a vision instruction-tuning model built on BLIP-2 with Flan-T5-xl as the language model; it generates descriptions from images and text instructions.
- **Instructblip Flan T5 Xl 8bit** · MIT · Mediocreatmybest · 18 · 1
  Image-to-Text · Transformers · English
  The vision-instruction-tuned version of BLIP-2, based on the Flan-T5-xl language model and designed for image-to-text generation.
- **Blip2 Flan T5 Xxl** · MIT · LanguageMachines · 22 · 1
  Image-to-Text · Transformers · English
  BLIP-2 is a vision-language model that pairs an image encoder with a large language model for image-to-text tasks.
- **Mplug Owl Bloomz 7b Multilingual** · Apache-2.0 · MAGAer13 · 29 · 9
  Image-to-Text · Transformers · Supports Multiple Languages
  mPLUG-Owl is a multilingual vision-language model built on the BLOOMZ-7B architecture, supporting image understanding and multi-turn dialogue.
- **Mplug Owl Llama 7b** · Apache-2.0 · MAGAer13 · 327 · 16
  Image-to-Text · Transformers · English
  mPLUG-Owl is a multimodal large language model based on the LLaMA-7B architecture, supporting image understanding and text generation.
- **Llama 7b Hf Transformers 4.29** · Other · elinas · 4,660 · 57
  Large Language Model · Transformers
  LLaMA is an efficient transformer-based foundation language model developed by Meta AI; this 7B-parameter version supports a range of language-processing tasks.
- **Blip2 Opt 2.7b** · MIT · Salesforce · 867.78k · 359
  Image-to-Text · Transformers · English
  BLIP-2 is a vision-language model that pairs an image encoder with a large language model for image-to-text generation.
- **Huhao Deviation Vit Base Patch16 224 In21k** · Other · HaoHu · 36 · 1
  Large Language Model · Transformers
  No verified description is available; the name suggests a ViT-Base (patch16, 224, ImageNet-21k) variant.
- **P6 Model** · Other · lialbose · 28 · 0
  Large Language Model · Transformers
  No description is available for this model; refer to its documentation for details.
- **Flan T5 Xxl** · Apache-2.0 · google · 157.41k · 1,238
  Large Language Model · Supports Multiple Languages
  FLAN-T5 is an instruction-fine-tuned language model based on T5; fine-tuning on more than 1,000 multilingual tasks yields superior performance at the same parameter count.
- **Biosyn Biobert Bc2gn** · dmis-lab · 32 · 0
  Large Language Model · Transformers
  A BioBERT-based model from dmis-lab's BioSyn framework for biomedical entity name normalization, trained on the BC2GN gene-normalization dataset.
- **Biosyn Sapbert Bc2gn** · dmis-lab · 857 · 1
  Large Language Model · Transformers
  A SapBERT-based model from dmis-lab's BioSyn framework for biomedical entity name normalization, trained on the BC2GN gene-normalization dataset.
- **Xglm 1.7B** · MIT · facebook · 1,514 · 19
  Large Language Model · Transformers · Supports Multiple Languages
  XGLM-1.7B is a multilingual autoregressive language model with 1.7 billion parameters, trained on a diverse, balanced corpus of 500 billion subword tokens.
- **Robertanlp** · subbareddyiiit · 26 · 0
  Large Language Model
  A general-purpose language model for a variety of natural-language tasks.
- **Xglm 4.5B** · MIT · facebook · 78 · 20
  Large Language Model · Transformers · Supports Multiple Languages
  XGLM-4.5B is a multilingual autoregressive language model with 4.5 billion parameters, trained on a balanced corpus covering 134 languages.
- **Agri Gpt2** · Mamatha · 15 · 1
  Large Language Model
  A general-purpose language model for a variety of natural-language-processing tasks.